Gert Lanckriet
(http://cosmal.ucsd.edu/~gert/)
University of California, San Diego
Wednesday 22nd August 2012
Time: 4pm
B10 Seminar Room, Basement,
Alexandra House, 17 Queen Square, London, WC1N 3AR
Music Recommendation with Multi-Modal Metric Learning to Rank
A revolution in music production, distribution, and consumption has made millions of songs available, through the Internet, to virtually anyone on the planet. To allow users to retrieve the desired content from this nearly infinite pool of possibilities, algorithms for automatic music indexing and recommendation are a must.
In this talk, I will discuss two aspects of automated content-based music analysis for music search and recommendation: i) automated music tagging for semantic retrieval, and ii) a query-by-example paradigm for content-based music recommendation, wherein a user queries the system by providing a song, and the system responds with a list of relevant or similar song recommendations (e.g., playlist generation for online radio).
Query-by-example applications ultimately depend on a notion of similarity between items to produce high-quality results. Current state-of-the-art systems employ collaborative filter methods to represent musical items, effectively comparing items in terms of their constituent users. While collaborative filter techniques perform well when historical data is available for each item, this reliance on historical data impedes performance on novel or unpopular items. To combat this problem, we rely on content-based similarity, which naturally extends to novel items but is typically outperformed by collaborative filter methods. In this talk, I will present a method for optimizing content-based similarity by learning from a sample of collaborative filter data. Finally, I will discuss how such algorithms may be adapted to improve recommendations when a variety of information besides musical content is also available (e.g., music video clips, web documents, and/or artwork describing musical artists).
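The general idea of learning content-based similarity from collaborative filter data can be illustrated with a minimal sketch. This is not the speaker's actual metric-learning-to-rank algorithm; it is a simplified triplet-based variant, and all names, dimensions, and data below are hypothetical: audio feature vectors are synthetic, and "collaborative filter" triplets (query, similar song, dissimilar song) are generated at random rather than derived from real user histories.

```python
# Hedged sketch: learn a linear map W over audio features so that distances
# agree with (synthetic) collaborative-filter similarity triplets. This is a
# simple triplet hinge-loss approach, not the speaker's MLR method.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                # assumed audio-feature dimensionality
X = rng.normal(size=(50, d))         # synthetic features for 50 songs

# Triplets (q, p, n): under the learned metric, song q should lie closer
# to p (CF-similar) than to n (CF-dissimilar).
triplets = [(rng.integers(50), rng.integers(50), rng.integers(50))
            for _ in range(200)]

def dist(W, a, b):
    """Squared distance between a and b after the linear map W."""
    z = W @ (a - b)
    return z @ z

W = np.eye(d)                        # start from plain Euclidean distance
lr, margin = 0.01, 1.0

for epoch in range(20):
    for q, p, n in triplets:
        # hinge loss: want dist(q, p) + margin <= dist(q, n)
        loss = dist(W, X[q], X[p]) + margin - dist(W, X[q], X[n])
        if loss > 0:
            dp, dn = X[q] - X[p], X[q] - X[n]
            grad = 2 * W @ (np.outer(dp, dp) - np.outer(dn, dn))
            W -= lr * grad           # shrink violated directions, grow others
```

Because the learned metric is a function of the audio content alone, it can rank novel songs that have no usage history, which is the motivation for transferring the collaborative filter signal into a content-based model.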